
    Modelling Spreading Process Induced by Agent Mobility in Complex Networks

    Most conventional epidemic models assume a contact-based contagion process. We depart from this assumption and study epidemic spreading in networks caused by agents acting as carriers of infection. These agents travel from origins to destinations along specific paths in a network and, in the process, infect the sites they pass through. We focus on the Susceptible-Infected-Removed (SIR) epidemic model and use continuous-time Markov chain analysis to model the impact of such agent-mobility-induced contagion mechanics, taking into account the state transitions of each node individually, as opposed to most conventional epidemic approaches, which usually consider the mean aggregated behavior of all nodes. Our approach makes one mean-field approximation to reduce complexity from exponential to polynomial. We study both network-wide properties, such as the epidemic threshold, and individual node vulnerability under such agent-assisted infection spreading. Furthermore, since infection is bi-directional, we provide a first-order approximation of the agents' vulnerability. We compare our analysis of spreading induced by agent mobility against a contact-based epidemic model via a case study on the London Underground network, the second busiest metro system in Europe, using a real dataset recording commuters' activities in the system. We highlight the key differences in the spreading patterns between the contact-based and agent-assisted spreading models. Specifically, we show that our model predicts a greater spreading radius than the conventional contact-based model due to agents' movements. Another interesting finding is that, in contrast to the contact-based model, where nodes located more centrally in a network are proportionally more prone to infection, our model shows no such strict correlation: nodes may not be highly susceptible even when located at the heart of the network, and vice versa.
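
    To make the node-level idea concrete, below is a minimal, illustrative Python sketch of an individual-node mean-field SIR update driven by agent paths. It is not the paper's continuous-time Markov chain formulation; the names `paths`, `agent_infected`, `beta` (per-visit infection probability) and `delta` (recovery rate) are assumptions introduced here, and agent infection states are held fixed for simplicity, whereas the paper treats infection as bi-directional.

```python
# Minimal sketch (not the paper's exact CTMC formulation): each node keeps
# probabilities of being Susceptible/Infected/Removed, and the infection
# pressure on a node comes from the infected agents whose paths traverse it.
# The names below (paths, agent_infected, beta, delta) are illustrative.
def simulate_agent_sir(paths, agent_infected, n_nodes, beta=0.3, delta=0.1, steps=50):
    p_s = [1.0] * n_nodes          # P(node is susceptible)
    p_i = [0.0] * n_nodes          # P(node is infected)
    p_r = [0.0] * n_nodes          # P(node is removed)
    for _ in range(steps):
        # probability that each node escapes infection from every infected agent crossing it
        escape = [1.0] * n_nodes
        for path, infected in zip(paths, agent_infected):
            if infected:
                for node in path:
                    escape[node] *= (1.0 - beta)
        new_s, new_i, new_r = [], [], []
        for v in range(n_nodes):
            infect = p_s[v] * (1.0 - escape[v])     # mean-field S -> I transition mass
            recover = p_i[v] * delta                # I -> R transition mass
            new_s.append(p_s[v] - infect)
            new_i.append(p_i[v] + infect - recover)
            new_r.append(p_r[v] + recover)
        p_s, p_i, p_r = new_s, new_i, new_r
    return p_i, p_r

# toy example: 5 stations, two agents, only the first agent initially infected
paths = [[0, 1, 2], [2, 3, 4]]
print(simulate_agent_sir(paths, agent_infected=[True, False], n_nodes=5))
```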

    Path-Based Epidemic Spreading in Networks

    Conventional epidemic models assume omnidirectional contact-based infection. This strongly associates the epidemic spreading process with node degrees. The role of the infection transmission medium is often neglected. In real-world networks, however, the infectious agent, as the physical contagion medium, usually flows from one node to another via specific directed routes (path-based infection). Here, we use continuous-time Markov chain analysis to model the influence of the infectious agent and routing paths on the spreading behavior by taking into account the state transitions of each node individually, rather than the mean aggregated behavior of all nodes. By applying a mean-field approximation, the analysis complexity of the path-based infection mechanics is reduced from exponential to polynomial. We show that the structure of the topology plays a secondary role in determining the size of the epidemic. Instead, it is the routing algorithm and traffic intensity that determine the survivability and the steady state of the epidemic. We define an infection characterization matrix that encodes both the routing and the traffic information. Based on this, we derive the critical path-based epidemic threshold below which the epidemic will die off, as well as conditional bounds of this threshold which network operators may use to promote or suppress path-based spreading in their networks. Finally, besides artificially generated random and scale-free graphs, we also use real-world networks and traffic as case studies to compare the behaviors of contact- and path-based epidemics. Our results further corroborate recent empirical observations that epidemics in communication networks are highly persistent.
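
    As a rough illustration of how routing and traffic, rather than topology alone, can be encoded in a matrix whose spectral radius acts as a threshold indicator, the sketch below builds a simple traffic-weighted "reachable along a path" matrix and compares its largest eigenvalue against a curing rate. The matrix definition and the parameters `beta` and `delta` are assumptions for illustration and may differ from the paper's infection characterization matrix and derived threshold.

```python
import numpy as np

# Hedged sketch: entry M[u, v] accumulates the traffic of routing paths on which
# v is visited after u, i.e. an infection picked up at u can be carried to v.
# This only illustrates the idea that routing + traffic (not topology alone)
# govern the epidemic threshold via the spectral radius of such a matrix.
def characterization_matrix(paths, traffic, n_nodes):
    M = np.zeros((n_nodes, n_nodes))
    for path, rate in zip(paths, traffic):
        for i, u in enumerate(path):
            for v in path[i + 1:]:
                M[u, v] += rate
    return M

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

paths = [[0, 1, 2, 3], [3, 2, 4], [1, 4]]
traffic = [2.0, 1.0, 0.5]            # illustrative per-path traffic intensities
M = characterization_matrix(paths, traffic, n_nodes=5)
# heuristic threshold check: effective spreading strength vs. curing rate
beta, delta = 0.2, 1.0
rho = spectral_radius(M)
print("spectral radius:", rho, "supercritical:", beta * rho > delta)
```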

    Providing proportional TCP performance by fixed-point approximations over bandwidth on demand satellite networks

    In this paper, we focus on the provision of proportional class-based service differentiation to transmission control protocol (TCP) flows in the context of bandwidth-on-demand (BoD) split-TCP geostationary (GEO) satellite networks. Our approach involves the joint configuration of TCP Performance Enhancing Proxy (TCP-PEP) agents at the transport layer and the scheduling algorithm controlling the resource allocation at the Medium Access Control (MAC) layer. We show that the two differentiation mechanisms exhibit complementary behavior in achieving the desired differentiation throughout the traffic load space: the TCP-PEPs control differentiation at low and medium system utilization, whereas the MAC scheduler becomes the dominant differentiation factor under high traffic load. The main challenge for the satellite operator is to appropriately configure these two mechanisms to achieve a specific differentiation target for the different classes of TCP flows. To this end, we propose a fixed-point framework to analytically approximate the achieved differentiated TCP performance. We validate the predictive capacity of our analytical method via simulations and show that our approximations closely match the performance of different classes of TCP flows under various scenarios for the network traffic load and the configuration of the MAC scheduler and TCP-PEP agents. Satellite network operators could use our approximations as an analytical tool to tune their networks.
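
    The sketch below shows what a fixed-point computation of per-class TCP rates could look like, coupling a PFTK-style square-root throughput law (standing in for the PEP-controlled transport layer) with a weighted MAC-layer allocation. It is a loose illustration of the framework's flavor, not the paper's model; the loss model, class weights, window constants and the 2 Mb/s capacity are all assumptions.

```python
import math

# Hedged sketch of a fixed-point iteration coupling transport- and MAC-layer
# differentiation: per-class TCP throughput follows a square-root law whose loss
# probability depends on the total offered load, capped by a weighted MAC share.
# All formulas and constants are illustrative, not the paper's framework.
def fixed_point_rates(weights, n_flows, capacity, rtt=0.56, mss=1460, iters=500):
    rates = [capacity / sum(n_flows)] * len(weights)              # per-flow rate guess (bit/s)
    for _ in range(iters):
        offered = sum(r * n for r, n in zip(rates, n_flows))
        loss = max((offered - 0.95 * capacity) / offered, 1e-4)   # crude congestion-loss model
        total_w = sum(w * n for w, n in zip(weights, n_flows))
        new_rates = []
        for w in weights:
            tcp_rate = w * mss * 8 / (rtt * math.sqrt(2 * loss / 3))   # PEP-weighted square-root law
            mac_share = capacity * w / total_w                          # weighted MAC allocation
            new_rates.append(min(tcp_rate, mac_share))
        if max(abs(a - b) for a, b in zip(new_rates, rates)) < 1.0:
            return new_rates
        rates = [(a + b) / 2 for a, b in zip(new_rates, rates)]          # damped update
    return rates

# three classes with weights 4:2:1, ten flows each, over a 2 Mb/s GEO link (RTT ~560 ms);
# at this high load the MAC shares dominate, consistent with the complementary behavior above
print(fixed_point_rates(weights=[4, 2, 1], n_flows=[10, 10, 10], capacity=2_000_000))
```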

    In-network cache management and resource allocation for information-centric networks

    We introduce the concept of resource management for in-network caching environments. We argue that in Information-Centric Networking environments, deterministically caching content messages at predefined places along the content delivery path results in unfair and inefficient content multiplexing between different content flows, as well as in significant caching redundancy. Instead, allocating resources along the path according to content flow characteristics results in better use of network resources and, therefore, higher overall performance. The design principles of our proposed in-network caching scheme, which we call ProbCache, target these two outcomes, namely reduction of caching redundancy and fair content flow multiplexing along the delivery path. In particular, ProbCache approximates the caching capability of a path and caches contents probabilistically in order to: 1) leave caching space for other flows sharing (part of) the same path, and 2) fairly multiplex contents in caches along the path from the server to the client. We elaborate on the content multiplexing fairness of ProbCache and find that it sometimes behaves in favor of content flows located far from the source, that is, it gives higher priority to flows travelling longer paths, leaving little space for shorter-path flows. We therefore introduce an enhanced version of the main algorithm that guarantees fair behavior for all participating content flows. We evaluate the proposed schemes in both homogeneous and heterogeneous cache size environments and formulate a framework for resource allocation in in-network caching environments. The proposed probabilistic approach to in-network caching exhibits ideal performance both in terms of network resource utilization and in terms of resource allocation fairness among competing content flows. Finally, and in contrast to the expected behavior, we find that the efficient design of ProbCache results in fast convergence to caching of popular content items.
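
    A minimal sketch of a ProbCache-style decision is given below: a router caches a passing item with a probability that grows both with the caching capacity still available on the remaining path (relative to a target budget) and with the distance the item has already travelled from the source, pushing copies towards the client. The exact weighting ProbCache uses differs; the parameter names (`hops_from_source`, `remaining_cache_slots`, `target_cache_slots`) are illustrative assumptions.

```python
import random

# Hedged sketch of a ProbCache-style probabilistic caching decision (not the
# exact published formula): combine a path-capacity factor with a distance
# factor so that flows share cache space fairly along the delivery path.
def probcache_decision(hops_from_source, path_length, remaining_cache_slots,
                       target_cache_slots, rng=random.random):
    cache_weight = min(remaining_cache_slots / target_cache_slots, 1.0)  # capacity left on remaining path
    distance_weight = hops_from_source / path_length                     # push copies towards the client
    p = cache_weight * distance_weight
    return rng() < p, p

decision, p = probcache_decision(hops_from_source=3, path_length=5,
                                 remaining_cache_slots=80, target_cache_slots=100)
print(f"cache here? {decision} (p = {p:.2f})")
```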

    Enhancing multi-source content delivery in content-centric networks with fountain coding

    Fountain coding has been considered especially suitable for lossy environments, such as wireless networks, as it provides redundancy while reducing coordination overheads between sender(s) and receiver(s). As such, it offers beneficial properties for multi-source and/or multicast communication. In this paper, we investigate enhancing multi-source content delivery efficiency in the context of Content-Centric Networking (CCN) with the use of fountain codes. In particular, we examine whether the combination of fountain coding with the in-network caching capabilities of CCN can further improve performance. We also present an enhancement of CCN's Interest forwarding mechanism that aims at minimizing the duplicate transmissions that may occur in a multi-source transmission scenario, where all available content providers and caches with matching (cached) content transmit data packets simultaneously. Our simulations indicate that the use of fountain coding in CCN is a valid approach that further increases network performance compared to traditional schemes.
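
    The sketch below illustrates the property that makes fountain coding attractive for multi-source delivery: any provider or cache can generate coded packets independently, without coordinating which blocks to send. It is a toy random linear fountain encoder over GF(2), not the code used in the paper (real deployments would use LT/Raptor codes with proper degree distributions), and all names here are illustrative.

```python
import os
import random

# Hedged sketch of a random linear fountain encoder over GF(2): each coded
# packet is the XOR of a pseudo-random subset of source blocks, identified by a
# seed, so independent senders need no coordination about which packet to emit.
def fountain_encode(blocks, seed):
    rng = random.Random(seed)
    subset = [i for i in range(len(blocks)) if rng.random() < 0.5] or [0]
    coded = bytearray(len(blocks[0]))
    for i in subset:
        for j, b in enumerate(blocks[i]):
            coded[j] ^= b
    return seed, bytes(coded)

content = os.urandom(4 * 1024)                                    # 4 KiB toy content item
blocks = [content[i:i + 1024] for i in range(0, len(content), 1024)]
packets = [fountain_encode(blocks, seed) for seed in range(6)]    # a few redundant packets
print(len(packets), "coded packets of", len(packets[0][1]), "bytes each")
```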

    Information Resilience in a Network of Caches with Perturbations

    Caching in a network of caches has been widely investigated for improving information/content delivery efficiency (e.g., for reducing content delivery latency, server load and bandwidth utilization). In this work, we look into another dimension of networks of caches – enhancing resilience in information dissemination rather than improving delivery efficiency. The underlying premise is that when information is cached at more locations, its availability is increased, which in turn improves information delivery resilience. This is especially important for networks with perturbations (e.g., node failures). Considering a general network of caches, we present a collaborative caching framework for maximizing the availability of information. Specifically, we formulate an optimization problem for maximizing the joint utility of caching nodes in serving content requests in perturbed networks. We first solve the centralized version of the problem and then propose a distributed caching algorithm that approximates the centralized solution. We compare our proposal against different caching schemes under a range of parameters, using both real-world and synthetic network topologies. The results show that our algorithm can significantly improve the joint utility of caching nodes. With our distributed caching algorithm, the achieved caching utility is up to five times higher than that of a greedy caching scheme. Furthermore, our scheme is found to be robust against increasing node failure rates, even for networks with a high number of vulnerable nodes.
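
    To give a feel for the objective being maximized, here is a hedged sketch of a centralized greedy baseline: the utility of a placement is the expected request volume still served by at least one surviving copy under node failure probabilities. The paper's actual formulation and its distributed approximation differ; `demand`, `fail` and `capacity` are illustrative assumptions.

```python
from itertools import product

# Hedged sketch of a centralized greedy placement maximizing an availability-style
# utility: a copy of item k at node v survives with probability (1 - fail[v]),
# and utility is the expected demand served by at least one surviving copy.
def expected_utility(placement, demand, fail):
    util = 0.0
    for k, rate in enumerate(demand):
        p_all_lost = 1.0
        for v in placement.get(k, []):
            p_all_lost *= fail[v]
        util += rate * (1.0 - p_all_lost)
    return util

def greedy_placement(demand, fail, capacity):
    placement, load = {}, [0] * len(fail)
    while True:
        base = expected_utility(placement, demand, fail)
        best = None
        for k, v in product(range(len(demand)), range(len(fail))):
            if load[v] >= capacity[v] or v in placement.get(k, []):
                continue
            trial = {**placement, k: placement.get(k, []) + [v]}
            gain = expected_utility(trial, demand, fail) - base
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, k, v)
        if best is None:
            return placement
        _, k, v = best
        placement.setdefault(k, []).append(v)
        load[v] += 1

# three items with different request rates, three nodes with one cache slot each
print(greedy_placement(demand=[5.0, 2.0, 1.0], fail=[0.1, 0.3, 0.05], capacity=[1, 1, 1]))
```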

    Seamless Support of Low Latency Mobile Applications with NFV-Enabled Mobile Edge-Cloud

    Emerging mobile multimedia applications, such as augmented reality, have stringent latency requirements and high computational cost. To address this, the mobile edge-cloud (MEC) has been proposed as an approach to bring resources closer to users. Recently, in contrast to conventional fixed cloud locations, the advent of network function virtualization (NFV) has, with some added cost due to the necessary decentralization, enhanced MEC with new flexibility in placing MEC services at any node capable of virtualizing its resources. In this work, we address the question of how to optimally place resources among NFV-enabled nodes to support mobile multimedia applications with low latency requirements, and when to adapt the current resource placement to address workload changes. We first show that the placement optimization problem is NP-hard and then propose an online dynamic resource allocation scheme that consists of an adaptive greedy heuristic algorithm and a detection mechanism that identifies when the system will no longer be able to satisfy the applications' delay requirement. Our scheme takes into account the effect of existing techniques (i.e., auto-scaling and load balancing). We design and implement a realistic NFV-enabled MEC simulation framework and show through extensive simulations that our proposal always manages to allocate sufficient resources on time to guarantee continuous satisfaction of the application latency requirements under changing workloads, while incurring up to 40% less cost in comparison to existing overprovisioning approaches.
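
    A minimal sketch of the adaptive-greedy-plus-detection idea is shown below: each application's workload goes to the lowest-latency NFV-enabled node that still has capacity, and the function reports the first application whose delay bound cannot be met, which is when re-placement or scaling would be triggered. The latency model, the node/application names and all numbers are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch of a greedy placement with a simple "detector": apps are handled
# tightest-deadline first; each is placed on the feasible node with the lowest
# latency, and the first app that cannot meet its delay bound is reported.
def greedy_place(apps, nodes):
    placement, used = {}, {n: 0 for n in nodes}
    for app, (load, delay_bound) in sorted(apps.items(), key=lambda a: a[1][1]):
        candidates = [(lat, n) for n, (cap, lat) in nodes.items() if used[n] + load <= cap]
        if not candidates:
            return placement, app                # detector: no node has enough capacity
        lat, best = min(candidates)
        if lat > delay_bound:
            return placement, app                # detector: delay bound cannot be met
        placement[app] = best
        used[best] += load
    return placement, None

apps = {"AR": (4, 10), "video": (2, 30)}                        # app -> (load units, delay bound in ms)
nodes = {"edge1": (5, 5), "edge2": (4, 8), "core": (50, 40)}    # node -> (capacity, latency in ms)
print(greedy_place(apps, nodes))
```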

    QoE-Assured 4K HTTP live streaming via transient segment holding at mobile edge

    HTTP-based live streaming has become increasingly popular in recent years, and more users have started generating 4K live streams from their devices (e.g., mobile phones) through social-media service providers such as Facebook or YouTube. If the audience is located far from a live stream source across the global Internet, TCP throughput becomes substantially suboptimal due to slow-start and congestion control mechanisms. This is especially the case when the end-to-end content delivery path involves a radio access network (RAN) at the last mile. As a result, the data rate perceived by a mobile receiver may not meet the high requirement of 4K video streams, which causes deteriorated Quality-of-Experience (QoE). In this paper, we propose a scheme named Edge-based Transient Holding of Live sEgment (ETHLE), which addresses this issue by performing context-aware transient holding of video segments at the mobile edge with virtualized content caching capability. By holding the minimum number of live video segments at the mobile edge cache in a context-aware manner, the ETHLE scheme achieves seamless 4K live streaming experiences across the global Internet by eliminating buffering and substantially reducing initial startup delay and live stream latency. It has been deployed as a virtual network function in an LTE-A network, and its performance has been evaluated using real live stream sources distributed around the world. The significance of this paper is that, by leveraging virtualized caching resources at the mobile edge, we have addressed the conventional transport-layer bottleneck and enabled QoE-assured Internet-wide live streaming to support emerging live streaming services with high data rate requirements.
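
    The sketch below captures the trade-off that transient holding manages: given measured (or predicted) download times of successive segments from a remote origin, it finds the minimum number of segments to buffer at the edge before release so that every segment arrives ahead of its playout deadline, i.e. no stalls at the cost of a small added live latency. This is a simplified back-of-the-envelope model, not the ETHLE scheme's actual context-aware logic; the example download times are illustrative.

```python
# Hedged sketch: find the minimum number of live segments to hold at the edge so
# that playback, once started, never stalls given per-segment fetch times from
# the origin. Holding N segments delays the live edge by N segment durations.
def min_segments_to_hold(download_times_s, segment_duration_s):
    arrival, held = 0.0, 0
    for i, dt in enumerate(download_times_s):
        arrival += dt
        # segment i is played out at (held + i) * segment_duration; it must have arrived by then
        while arrival > (held + i) * segment_duration_s:
            held += 1
    return held

# 1-second segments; early segments fetch slowly due to TCP slow-start over the long-haul path
print(min_segments_to_hold([2.4, 1.8, 1.3, 1.0, 0.9, 0.9], segment_duration_s=1.0))
```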

    Cost-efficient Low Latency Communication Infrastructure for Synchrophasor Applications in Smart Grids

    With the introduction of distributed renewable energy resources and new loads, such as electric vehicles, the power grid is evolving into a highly dynamic system that necessitates continuous and fine-grained observability of its operating conditions. In the context of the medium voltage (MV) grid, this has motivated the deployment of Phasor Measurement Units (PMUs), which offer high-precision synchronized grid monitoring, enabling mission-critical applications such as fault detection/location. However, PMU-based applications present stringent delay requirements, raising a significant challenge for the communication infrastructure. In contrast to the high voltage domain, there is no clear vision for the communication and network topologies of the MV grid; a full-fledged optical fiber-based communication infrastructure is a costly approach due to the density of PMUs required. In this work, we focus on supporting low-latency PMU-based applications in the MV domain, identifying and addressing the trade-off between communication infrastructure deployment costs and the corresponding performance. We study a large set of real MV grid topologies to gain an in-depth understanding of the key latency factors. Building on the gained insights, we propose three algorithms for the careful placement of high-capacity links, targeting a balance between deployment costs and achieved latencies. Extensive simulations demonstrate that the proposed algorithms result in low-latency network topologies while reducing deployment costs by up to 80% in comparison to a ubiquitous deployment of costly high-capacity links.
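
    To illustrate the cost/latency balance, here is a hedged sketch of one possible greedy link-upgrade heuristic: repeatedly upgrade the link whose replacement by a high-capacity link yields the largest reduction in the worst PMU-to-control-center latency per unit of cost, until all PMU paths meet the delay budget. The paper's three placement algorithms differ; the link names, latencies, costs and budget below are toy values.

```python
# Hedged sketch of a greedy link-upgrade heuristic: pick upgrades by the ratio of
# worst-path latency reduction to link cost until the delay budget is met
# everywhere or no single upgrade still helps.
def greedy_link_upgrades(pmu_paths, link_latency, fast_latency, link_cost, delay_budget):
    upgraded, spent = set(), 0.0

    def path_delay(path):
        return sum(fast_latency if l in upgraded else link_latency[l] for l in path)

    while max(path_delay(p) for p in pmu_paths) > delay_budget:
        worst = max(path_delay(p) for p in pmu_paths)
        best = None
        for link in link_latency:
            if link in upgraded:
                continue
            upgraded.add(link)
            gain = worst - max(path_delay(p) for p in pmu_paths)
            upgraded.remove(link)
            if gain > 0 and (best is None or gain / link_cost[link] > best[0]):
                best = (gain / link_cost[link], link)
        if best is None:
            break                                   # no single upgrade reduces the worst latency
        upgraded.add(best[1])
        spent += link_cost[best[1]]
    return upgraded, spent

pmu_paths = [["a", "b"], ["c", "b"], ["d"]]               # link sequences from each PMU towards the control center
link_latency = {"a": 8.0, "b": 6.0, "c": 4.0, "d": 3.0}   # ms over the low-cost links
print(greedy_link_upgrades(pmu_paths, link_latency, fast_latency=1.0,
                           link_cost={l: 10.0 for l in link_latency}, delay_budget=8.0))
```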

    Adaptive Coding and Modulation Aided Mobile Relaying for Millimeter-Wave Flying Ad-Hoc Networks

    The emerging drone swarms are capable of carrying out sophisticated tasks in support of demanding Internet-of-Things (IoT) applications by synergistically working together. However, the target area may be out of the coverage of the ground station, and it may be impractical to deploy a large number of drones in the target area due to cost, electromagnetic interference and flight-safety regulations. By exploiting the innate agility and mobility of unmanned aerial vehicles (UAVs), we conceive a mobile relaying-assisted drone swarm network architecture, which is capable of extending the coverage of the ground station and enhancing the effective end-to-end throughput. Explicitly, a swarm of drones forms a data-collecting drone swarm (DCDS) designed for sensing and collecting data with the aid of their mounted cameras and/or sensors, and a powerful relay-UAV (RUAV) acts as a mobile relay for conveying data between the DCDS and a ground station (GS). Given a time period, in order to maximize the data delivered whilst minimizing the delay imposed, we harness an ε-multiple-objective genetic algorithm (ε-MOGA) assisted Pareto-optimization scheme. Our simulation results demonstrate that the proposed mobile relaying is capable of delivering more data. As specific examples investigated in our simulations, our mobile relaying-assisted drone swarm network is capable of delivering 45.38% more data than the benchmark solutions when a stationary relay is available, and 26.86% more data than the benchmark solutions when no stationary relay is available.
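
    To illustrate the Pareto-optimization flavor of the relay planning, the sketch below keeps only the non-dominated candidates among relaying schedules scored by (data delivered, delay). A real ε-MOGA additionally evolves candidates with genetic operators and an ε-grid archive; the candidate names and scores here are purely illustrative assumptions.

```python
# Hedged sketch of the Pareto-dominance test underlying multi-objective relay
# planning: keep candidates for which no other candidate delivers at least as
# much data with at most the same delay (and strictly better in one objective).
def pareto_front(candidates):
    front = []
    for name, data, delay in candidates:
        dominated = any(d2 >= data and t2 <= delay and (d2 > data or t2 < delay)
                        for _, d2, t2 in candidates)
        if not dominated:
            front.append((name, data, delay))
    return front

# toy RUAV relaying schedules scored by (data delivered in MB, delay in s)
candidates = [("hover-midpoint", 120.0, 8.0), ("shuttle", 150.0, 12.0), ("loiter-GS", 90.0, 15.0)]
print(pareto_front(candidates))
```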